- The iPad I recommend to most users is only $299 right now
- One of the most versatile action cameras I've tested isn't from GoPro - and it's on sale
- Small Manufacturers, Big Target: The Growing Cyber Threat and How to Defend Against It
- Why I pick this JBL speaker over competing models for outdoor listening
- Testing a smart cooler proved I can never go back to toting ice (and it's on sale)
M365 Copilot: New Zero-Click AI Flaw Allows Corporate Data Theft

In a world first, researchers from Aim Labs have identified a critical zero-click vulnerability in Microsoft 365 Copilot that can lead to the exfiltration of sensitive corporate data with a simple email.
The vulnerability, dubbed ‘EchoLeak,’ exploits design flaws typical of Retrieval Augmented Generation (RAG) Copilots, allowing attackers to automatically exfiltrate any data from M365 Copilot’s context, without relying on specific user behavior.
It was discovered by the Aim Labs researchers while using a new exploitation technique called ‘Large language model (LLM) Scope Violation.’
This is the first zero-click AI vulnerability ever discovered, according to the researchers in a June 11 report which shared their findings.
Aim Labs contacted Microsfot about the flaw in January 2025. The tech giant finalized the patch for the vulnerability in May 2025.
How Microsoft 365 Copilot Uses RAG and LLMs
Microsoft 365 Copilot is an AI-powered productivity tool that integrates with apps such as Word, Excel, PowerPoint, Outlook and Teams. It utilizes LLMs – specifically, OpenAI’s GPT models – and the Microsoft Graph to personalize responses, offering features such as drafting documents, summarizing emails and generating presentations.
More precisely, Microsoft 365 Copilot utilizes RAG, a technique that enables LLMs to retrieve and incorporate new information.
“To deliver this functionality, M365 Copilot queries the Microsoft Graph and retrieves any relevant information from the user’s organizational environment, including their mailbox, OneDrive storage, M365 Office files, internal SharePoint sites and Microsoft Teams chat history,” the Aim Labs report explained. “Copilot’s permission model ensures that the user only has access to their own files, but these files could contain sensitive, proprietary or compliance information!”
LLM Scope Violation
During the testing of M365 Copilot, the Aim Labs researchers performed a new type of indirect prompt injection (tracked as LLM01 in OWASP’s Top 10 for LLM Applications), which they called ‘LLM Scope Violation.’
This technique aims to allow the LLM to access trusted data without the user’s consent. It involves an attacker performing several steps, including bypassing various security measures to inject malicious prompts into the LLM.
Here is a step-by-step breakdown of the attack chain:
- XPIA Bypass: The attacker sends an email containing instructions (a specific markdown syntax) supposed to trigger Copilot’s underlying LLM, but with a message phrased as if the instructions were aimed at the recipient of the mail, thus bypassing Microsoft’s cross-prompt injection attack (XPIA) classifiers
- Link Redaction Bypass: The attacker requests sensitive corporate information from Copilot and attempts to exfiltrate it by including it in a markdown link using reference-style markdown links – thus bypassing link redaction security measures
- Image Redaction Bypass: To automate the exfiltration without requiring the user to click on a link, the attackers attempted to have Copilot output an image with sensitive information appended as a query string parameter to the image URL by using reference-style markdown images – thus bypassing image redaction security measures
- CSP Bypass: The browser’s Content-Security-Policy (CSP) restricts fetching images from unauthorized domains, preventing the exfiltration via the image URL. To bypass this, the attackers explored allowed domains specified in the CSP, particularly focusing on SharePoint and Microsoft Teams
Common Design Flaws in RAG Apps and AI Agents
The Aim Labs researchers named this chain of vulnerabilities ‘EchoLeak’, including both traditional vulnerabilities and AI vulnerabilities at its core.
“This is a novel practical attack on an LLM application that can be weaponized by adversaries. The attack results in allowing the attacker to exfiltrate the most sensitive data from the current LLM context – and the LLM is being used against itself in making sure that the MOST sensitive data from the LLM context is being leaked, does not rely on specific user behavior, and can be executed both in single-turn conversations and multi-turn conversations,” they concluded.
While they applied their technique to M365 Copilot, they assessed that it would work with other RAG applications and AI agents.